[preview] Decrease preview env density #4750
Conversation
Should we have an equivalent change in the other file(s) too?
Are we deploying multiple preview environments to them? If yes: maybe. But let's start with the problems we know about.
.werft/values.dev.yaml (outdated)

```yaml
resources:
  default:
    # as opposed to 200Mi, the default
    memory: 500Mi
```
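For context, a minimal sketch of where this default presumably ends up, assuming the chart maps `resources.default` onto each preview-env container's standard Kubernetes `resources.requests` (the actual template wiring is not shown in this diff):

```yaml
# Hypothetical rendered container fragment (illustrative only):
resources:
  requests:
    memory: 500Mi  # the scheduler reserves this much per pod when packing nodes
```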
Why 500Mi? Is there some kind of calculation behind this, akin to #4700 (comment)?
No calculation; I assumed the assignment of static pods was too dynamic to reason about.
Let's try. To come as close as possible to the theoretical limit of 33 preview envs, our goals are:
- don't consume so much RAM that we drive the node count up artificially
- ensure we never hit the "max nr of pods" limit with static deployments
Thus we make the requests big enough that, say, 100 pods fill up a whole node (we ignore the DaemonSets here because they are quite small):
=> 32Gi / 100 ≈ 328Mi
If we choose 350Mi, we come close enough, it seems (spelled out in the sketch below).
WDYT? @csweichel
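A minimal sketch of that sizing in values form, with the arithmetic in comments (it assumes roughly 32Gi of allocatable memory per node and ignores DaemonSet and system overhead, so real numbers will be somewhat lower):

```yaml
resources:
  default:
    # 32Gi = 32 * 1024Mi = 32768Mi; 32768Mi / 100 pods ≈ 328Mi per pod.
    # Rounding up to 350Mi keeps us just under that target:
    # floor(32768 / 350) = 93 pods fill a node on memory alone,
    # before the "max nr of pods" limit is ever reached.
    memory: 350Mi
```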
Sounds good to me. Let's do this.
Yes, we are deploying multiple preview envs to the k3s ws cluster.
But we're not having trouble with it atm, and not all preview envs deploy to that cluster, I reckon. Let's wait until we have problems; we can quickly come back to this then.
Branch force-pushed from efa62f8 to 70dab6f, then from 70dab6f to 5fa2cb1.